# Multi-version quantization
## Qwen3 4B Q8_0 64K/128K/256K Context GGUF

- License: Apache-2.0
- Author: DavidAU
- Category: Large Language Model

Three Q8_0-quantized GGUF versions of the Qwen3 4B model, supporting 64K, 128K, and 256K context lengths, optimized for long-text generation and deep reasoning tasks.
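For reference, a minimal sketch of loading one of these GGUF files with an extended context window using llama-cpp-python. The file name below is an assumption for illustration only; substitute the actual GGUF file published by DavidAU for the variant you want (64K, 128K, or 256K).

```python
# Minimal sketch, assuming llama-cpp-python is installed and the GGUF file
# has been downloaded locally. The file name is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-4B-128k-Context-Q8_0.gguf",  # hypothetical file name
    n_ctx=131072,      # request the full 128K context window of this variant
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

output = llm(
    "Summarize the key points of the following document:\n...",
    max_tokens=512,
)
print(output["choices"][0]["text"])
```

Note that allocating a 128K or 256K KV cache requires substantially more memory than the default context; choose the variant whose context length matches what your hardware can hold.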